Different types of mental rotation tests have been used extensively in psychology to understand human visual reasoning and perception. Understanding what an object or visual scene would look like from another viewpoint is a challenging problem that is made even harder if it must be performed from a single image. We explore a controlled setting whereby questions are posed about the properties of a scene if that scene were observed from another viewpoint. To do this we have created a new version of the CLEVR dataset that we call CLEVR Mental Rotation Tests (CLEVR-MRT). Using CLEVR-MRT we examine standard methods, show how they fall short, then explore novel neural architectures that involve inferring volumetric representations of a scene. These volumes can be manipulated via camera-conditioned transformations to answer the question. We examine different model variants through rigorous ablations and demonstrate the efficacy of volumetric representations.
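As a rough illustration of manipulating a volumetric scene representation with a camera-conditioned transform, the sketch below rotates a toy feature volume about its vertical axis in 90-degree steps. The paper's model applies learned, continuous transformations; the function, grid size, and discrete rotation here are purely illustrative.

```python
import numpy as np

def rotate_volume(vol, azimuth_deg):
    """Toy stand-in for a camera-conditioned volume transform: rotate a
    (D, H, W) feature volume about the vertical axis in 90-degree steps.
    A learned model would apply a continuous transformation instead."""
    k = int(round(azimuth_deg / 90.0)) % 4
    return np.rot90(vol, k=k, axes=(1, 2))

vol = np.zeros((2, 4, 4))
vol[:, 0, 0] = 1.0                 # a "feature" at one corner of the volume
rotated = rotate_volume(vol, 90.0)  # view the same scene from a new azimuth
```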
Inverse kinematics (IK) systems are often rigid with respect to their input character, requiring user intervention to adapt to new skeletons. In this paper, we aim to create a flexible, learned IK solver applicable to a wide variety of human morphologies. We extend a state-of-the-art machine learning IK solver to operate on the well-known Skinned Multi-Person Linear model (SMPL). We call our model SMPL-IK and show that, when integrated into real-time 3D software, this extended system opens up opportunities for defining novel AI-assisted animation workflows. For example, pose authoring can be made more flexible by allowing users to modify gender and body shape while posing a character. Additionally, when chained with existing pose estimation algorithms, SMPL-IK accelerates posing by letting users bootstrap 3D scenes from 2D images while still allowing further editing. Finally, we present a novel SMPL Shape Inversion mechanism (SMPL-SI) that maps arbitrary humanoid characters to the SMPL space, enabling artists to leverage SMPL-IK on custom characters. In addition to qualitative demonstrations of the proposed tools, we present quantitative SMPL-IK baselines on the H36M and AMASS datasets.
Meta-learning algorithms for few-shot learning aim to train neural networks capable of generalizing to novel tasks using only a few examples. Early stopping is critical for performance, halting model training when it reaches the best generalization to the new task distribution. Early stopping mechanisms in meta-learning typically rely on measuring model performance on labeled examples from a meta-validation set drawn from the training (source) dataset. This is problematic in few-shot transfer learning settings, where the meta-test set comes from a different target dataset (out-of-distribution) and there can be a large distribution shift with respect to the meta-validation set. In this work, we propose Activation-Based Early stopping (ABE), an alternative to validation-based early stopping for meta-learning. Specifically, we analyze the evolution, during training, of the neural activations of each hidden layer on a small set of unlabeled support examples from a single task of the target task distribution, as this constitutes a minimal and justifiably available source of information about the target problem. Our experiments show that simple, label-agnostic statistics on the activations offer an effective way to estimate how target generalization evolves over time. At each hidden layer, we characterize the activation distributions by their first and second moments, then further aggregate along the feature dimension, yielding a compact yet intuitive characterization in a four-dimensional space. Detecting when, throughout training time, and at which layer the target activation trajectory diverges from the activation trajectory of the source data allows us to perform early stopping and improve generalization in a large array of few-shot transfer learning settings, across different algorithms, source datasets, and target datasets.
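A minimal sketch of the kind of label-agnostic activation statistic described above: per-layer activations are summarized by their feature-wise first and second moments, then aggregated along the feature dimension into a compact four-dimensional descriptor, and the drift between source and target descriptors is the kind of signal an ABE-style criterion would track over training. The function name, the random data, and the Euclidean drift metric are all illustrative, not the paper's exact formulation.

```python
import numpy as np

def activation_descriptor(acts):
    """Summarize a layer's activations (batch x features) into 4 numbers:
    first/second moments per feature, then mean/std across features."""
    mu = acts.mean(axis=0)          # first moment per feature
    m2 = (acts ** 2).mean(axis=0)   # second moment per feature
    return np.array([mu.mean(), mu.std(), m2.mean(), m2.std()])

# Toy example: descriptors for hidden activations from two distributions,
# standing in for source-data vs. target-support activations.
rng = np.random.default_rng(0)
src = activation_descriptor(rng.normal(0.0, 1.0, size=(64, 128)))
tgt = activation_descriptor(rng.normal(0.5, 1.0, size=(64, 128)))

# A growing distance between the two trajectories over training epochs
# is the kind of label-free signal that could trigger early stopping.
drift = float(np.linalg.norm(src - tgt))
```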
In this paper, we explore the use of GAN-based few-shot data augmentation as a method to improve few-shot classification performance. We perform an exploration into how such generative models can be fine-tuned for this task (one approach being fine-tuning in a class-incremental manner), as well as a rigorous empirical investigation into how well these models perform at improving few-shot classification. We identify issues related to the difficulty of training such generative models in a purely supervised regime with very few examples, as well as issues regarding the evaluation protocols of existing works. We also find that, in this regime, classification accuracy is highly sensitive to how the classes of the dataset are randomly partitioned. Therefore, we propose a semi-supervised fine-tuning approach as a more pragmatic direction to address these issues.
Pre-trained language models (LMs) often struggle to reason logically or generalize in a compositional fashion. Recent work suggests that incorporating external entity knowledge can improve LMs' abilities and generalization. However, the effect of explicitly providing entity abstraction remains unclear, especially since recent studies suggest that pre-trained LMs already encode some of that knowledge in their parameters. We study the utility of incorporating entity type abstractions into pre-trained Transformers and test these methods on four NLP tasks requiring different forms of logical reasoning: (1) compositional language understanding with text-based relational reasoning (CLUTRR), (2) abductive reasoning (ProofWriter), (3) multi-hop question answering (HotpotQA), and (4) conversational question answering (CoQA). We propose and empirically explore three ways to add such abstraction: (i) as additional input embeddings, (ii) as a separate sequence to encode, and (iii) as an auxiliary prediction task for the model. Overall, our analysis indicates that models with abstract entity knowledge perform better than those without it. However, our experiments also show that the benefits strongly depend on the technique used and the task at hand. On CLUTRR and ProofWriter, the best abstraction-aware models achieved overall accuracies of 88.8% and 91.8%, compared to 62.3% and 89.8% for the baseline models, respectively. In addition, abstraction-aware models showed improved compositional generalization in both interpolation and extrapolation settings. However, for HotpotQA and CoQA, we find that F1 scores improved by only 0.5% on average. Our results suggest that the benefit of explicit abstraction is significant in formally defined logical reasoning settings requiring many reasoning hops, but point to the notion that it is less beneficial for NLP tasks with less formal logical structure.
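Option (i), adding entity abstractions as additional input embeddings, can be sketched as summing each token embedding with an embedding of the token's entity type, analogous to how segment embeddings are added in BERT-style Transformers. All vocabulary sizes, dimensions, ids, and names below are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding tables: one for tokens, one for entity types
# (e.g. 0 = none, 1 = PERSON, 2 = LOCATION, ...). Sizes are illustrative.
vocab, n_types, dim = 100, 5, 16
tok_emb = rng.normal(0.0, 0.02, (vocab, dim))
type_emb = rng.normal(0.0, 0.02, (n_types, dim))

token_ids = np.array([12, 7, 55])   # hypothetical token ids in a sentence
type_ids = np.array([1, 0, 2])      # entity type assigned to each token

# Entity abstraction enters the model as an additive input embedding.
inputs = tok_emb[token_ids] + type_emb[type_ids]
```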
The standard formulation of reinforcement learning lacks a practical way of specifying admissible and forbidden behaviors. Most often, practitioners go about the task of behavior specification by manually engineering the reward function, a counter-intuitive process that requires several iterations and is prone to reward hacking by the agent. In this work, we argue that constrained RL, which has almost exclusively been used for safe RL, also has the potential to significantly reduce the amount of work spent on reward specification in applied RL projects. To this end, we propose specifying behavioral preferences in the CMDP framework and using Lagrangian methods, which seek to solve a min-max problem between the agent's policy and the Lagrangian multipliers, to automatically weigh each behavioral constraint. Specifically, we investigate how CMDPs can be adapted to solve goal-based tasks while adhering to a set of behavioral constraints, and propose modifications to the SAC-Lagrangian algorithm to handle the challenging case of several constraints. We evaluate this framework on a set of continuous control tasks relevant to the application of reinforcement learning for NPC design in video games.
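The Lagrangian method mentioned above can be sketched as dual ascent with one multiplier per behavioral constraint: a multiplier grows while its measured cost exceeds the chosen limit and shrinks (projected to stay non-negative) otherwise, automatically weighing that constraint in the policy loss. The learning rate, costs, and limits below are illustrative values, not those of the paper.

```python
def lagrangian_step(lmbda, cost_estimate, cost_limit, lr=0.05):
    """One dual-ascent update of a Lagrange multiplier for a single
    behavioral constraint: increase lambda when the measured cost
    exceeds its limit, decrease it otherwise, projected onto >= 0."""
    return max(0.0, lmbda + lr * (cost_estimate - cost_limit))

# Several constraints get one multiplier each; the policy objective then
# weighs each constraint's cost by its current multiplier.
lambdas = [0.0, 0.0]
costs = [0.8, 0.1]    # hypothetical per-constraint episodic costs
limits = [0.5, 0.5]   # behavior limits chosen by the practitioner
for _ in range(100):
    lambdas = [lagrangian_step(l, c, d)
               for l, c, d in zip(lambdas, costs, limits)]
# The violated constraint's multiplier grows; the satisfied one stays 0.
```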
We are interested in interactive agents that learn to coordinate, namely, a $builder$ -- which performs actions but is ignorant of the goal of the task -- and an $architect$ -- which guides the builder towards that goal. We define and explore a formal setting in which artificial agents are equipped with mechanisms that allow them to learn a task while simultaneously evolving a shared communication protocol. Experimental semiotics has shown that humans are proficient at learning from a priori unknown instruction meanings. We therefore take inspiration from it and present the Architect-Builder Problem (ABP): an asymmetric setting in which an architect must learn to guide a builder towards constructing a specific structure. The architect knows the target structure but cannot act in the environment and can only send arbitrary messages to the builder. The builder, on the other hand, can act in the environment but has no knowledge of the task at hand and must learn to solve it relying only on the messages sent by the architect. Crucially, the meaning of the messages is initially not defined between the agents but must be negotiated throughout learning. Under these constraints, we propose Architect-Builder Iterated Guiding (ABIG), a solution to the Architect-Builder Problem in which the architect leverages a learned model of the builder to guide it, while the builder uses self-imitation learning to reinforce its guided behavior. We analyze the key learning mechanisms of ABIG and test it in a two-dimensional instantiation of the ABP, where tasks involve grasping cubes, placing them at a given location, or building various shapes. In this environment, ABIG results in a low-level, high-frequency guiding communication protocol that not only enables an architect-builder pair to solve the task at hand, but also generalizes to unseen tasks.
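The builder's self-imitation learning can be sketched as keeping only trajectories that ended in success and replaying them as imitation targets, reinforcing whatever guided behavior produced them. The class below is a minimal illustration of that idea, not the paper's implementation; the trajectory format and names are hypothetical.

```python
import random

class SelfImitationBuffer:
    """Minimal sketch of self-imitation: store only successful
    (message, action) trajectories and replay them as imitation
    targets for the builder's policy update."""

    def __init__(self):
        self.successes = []

    def add(self, trajectory, succeeded):
        # Failed attempts are discarded; only success is imitated.
        if succeeded:
            self.successes.append(trajectory)

    def sample(self):
        # A sampled trajectory would be cloned via a behavioral-cloning
        # loss, reinforcing the builder's response to each message.
        return random.choice(self.successes) if self.successes else None

buf = SelfImitationBuffer()
buf.add([("msg_a", "move"), ("msg_b", "grasp")], succeeded=True)
buf.add([("msg_a", "noop")], succeeded=False)
```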
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
Remote sensing imagery provides comprehensive views of the Earth, where different sensors collect complementary data at different spatial scales. Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, with the resulting models used for various tasks with imagery from a range of spatial scales. Such models overlook scale-specific information in the data. In this paper, we present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales throughout the pretraining process. Scale-MAE pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image determines the scale of the ViT positional encoding, not the image resolution. Scale-MAE encodes the masked image with a standard ViT backbone, and then decodes the masked image through a bandpass filter to reconstruct low/high frequency images at lower/higher scales. We find that tasking the network with reconstructing both low/high frequency images leads to robust multiscale representations for remote sensing imagery. Scale-MAE achieves an average of a $5.0\%$ non-parametric kNN classification improvement across eight remote sensing datasets compared to current state-of-the-art and obtains a $0.9$ mIoU to $3.8$ mIoU improvement on the SpaceNet building segmentation transfer task for a range of evaluation scales.
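The scale-aware positional encoding can be sketched as a standard sinusoidal encoding whose positions are scaled by the ground extent the image covers rather than by its pixel count, so the same patch of Earth receives comparable encodings regardless of resolution. The 1D form, reference extent, and dimensions below are hypothetical simplifications, not the paper's exact parameterization.

```python
import numpy as np

def gsd_positional_encoding(num_patches, dim, ground_extent_m,
                            ref_extent_m=100.0):
    """Sketch of a scale-aware sinusoidal positional encoding: patch
    positions are scaled by the area of Earth the image covers, not by
    its pixel resolution. `ref_extent_m` is a hypothetical reference."""
    scale = ground_extent_m / ref_extent_m
    pos = np.arange(num_patches)[:, None] * scale     # scale-aware positions
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000.0 ** (2 * i / dim))         # standard sinusoid freqs
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

# Two images with the same patch grid but different ground footprints get
# different encodings, reflecting their different spatial scales.
pe_wide = gsd_positional_encoding(16, 8, ground_extent_m=100.0)
pe_tight = gsd_positional_encoding(16, 8, ground_extent_m=50.0)
```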
With an ever-growing number of parameters defining increasingly complex networks, Deep Learning has led to several breakthroughs surpassing human performance. As a result, data movement for these millions of model parameters causes a growing imbalance known as the memory wall. Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories. On the software side, the sequential Backpropagation algorithm prevents efficient parallelization and thus fast convergence. A novel method, Direct Feedback Alignment, resolves inherent layer dependencies by directly passing the error from the output to each layer. At the intersection of hardware/software co-design, there is a demand for developing algorithms that are tolerant to hardware nonidealities. Therefore, this work explores the interrelationship of implementing bio-plausible learning in-situ on neuromorphic hardware, emphasizing energy, area, and latency constraints. Using the benchmarking framework DNN+NeuroSim, we investigate the impact of hardware nonidealities and quantization on algorithm performance, as well as how network topologies and algorithm-level design choices scale the latency, energy, and area consumption of a chip. To the best of our knowledge, this work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware and vice versa. The best results achieved for accuracy remain Backpropagation-based, notably when facing hardware imperfections. Direct Feedback Alignment, on the other hand, allows for significant speedup due to parallelization, reducing training time by a factor approaching N for N-layered networks.
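A minimal sketch of Direct Feedback Alignment on a toy two-layer regression, assuming nothing beyond the description above: the hidden layer receives the output error through a fixed random matrix rather than the transpose of the forward weights, which removes the layer-by-layer dependency of backpropagation. All sizes, learning rates, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward weights and a FIXED random feedback matrix for the hidden layer.
W1 = rng.normal(0.0, 0.5, (4, 3))
W2 = rng.normal(0.0, 0.5, (1, 4))
B1 = rng.normal(0.0, 0.5, (4, 1))   # fixed random feedback path (never trained)

x = rng.normal(size=(3, 8))          # 8 samples, 3 features
y = np.ones((1, 8))                  # toy regression target

def mse():
    return float(((W2 @ np.tanh(W1 @ x) - y) ** 2).mean())

loss_before = mse()
for _ in range(200):
    h = np.tanh(W1 @ x)
    e = W2 @ h - y                    # output error
    W2 = W2 - 0.05 * (e @ h.T) / 8    # output layer: ordinary gradient step
    dh = (B1 @ e) * (1.0 - h ** 2)    # error routed through B1, not W2.T
    W1 = W1 - 0.05 * (dh @ x.T) / 8   # hidden update needs no backward pass
loss_after = mse()
```

Because each layer's update depends only on the output error and fixed feedback weights, the hidden-layer updates could be computed in parallel, which is the source of the speedup discussed above.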